Search results for "Multiplication"

Showing 10 of 49 documents

Micropropagation of Sicilian cultivars with an aim to preserve genetic diversity in hazelnut (Corylus avellana L.)

2018

The use of a small number of cultivars in agriculture can lead to a loss of agrobiodiversity. Since in vitro techniques are valuable tools for conserving plant biodiversity, an efficient micropropagation protocol for four Italian hazelnut cultivars, ‘Carrello’, ‘Ghirara’, ‘Minnulara’, and ‘Panottara’, was developed. The highest axillary bud survival was obtained after decontamination with 40 min 1% sodium hypochlorite followed by 40 min 0.1% sodium merthiolate in ‘Minnulara’ and ‘Ghirara’, while the 35+35 min treatment was the best for ‘Carrello’ and ‘Panottara’. Shoot multiplication was higher in ‘Minnulara’ and ‘Ghirara’ when 6.6 µM N6-benzyladenine was used, even if some hyperhydric shoo…

Keywords: micropropagation; shoot multiplication; rooting induction; indole-3-butyric acid; N6-benzyladenine; metatopolin; decontamination time; cultivar; genetic diversity; agricultural biodiversity; horticulture; Sicilian
Published in: Plant Biosystems - An International Journal Dealing with all Aspects of Plant Biology

FeatherCNN: Fast Inference Computation with TensorGEMM on ARM Architectures

2020

Deep Learning is ubiquitous in a wide field of applications ranging from research to industry. In comparison to time-consuming iterative training of convolutional neural networks (CNNs), inference is a relatively lightweight operation making it amenable to execution on mobile devices. Nevertheless, lower latency and higher computation efficiency are crucial to allow for complex models and prolonged battery life. Addressing the aforementioned challenges, we propose FeatherCNN – a fast inference library for ARM CPUs – targeting the performance ceiling of mobile devices. FeatherCNN employs three key techniques: 1) A highly efficient TensorGEMM (generalized matrix multiplication) routine is app…
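
The details of TensorGEMM belong to the paper, but the general idea behind fast GEMM kernels can be sketched: compute C = A·B in small tiles so each tile of the operands stays resident in fast memory (registers or cache). Below is an illustrative cache-blocked GEMM in plain Python, not FeatherCNN's actual implementation:

```python
def gemm_blocked(A, B, tile=2):
    """Cache-blocked GEMM sketch: C = A @ B computed tile by tile.
    A is n x k, B is k x m, given as lists of lists."""
    n, k, m = len(A), len(A[0]), len(B[0])
    C = [[0.0] * m for _ in range(n)]
    for i0 in range(0, n, tile):
        for j0 in range(0, m, tile):
            for p0 in range(0, k, tile):
                # Accumulate the contribution of one (tile x tile) block product.
                for i in range(i0, min(i0 + tile, n)):
                    for j in range(j0, min(j0 + tile, m)):
                        s = C[i][j]
                        for p in range(p0, min(p0 + tile, k)):
                            s += A[i][p] * B[p][j]
                        C[i][j] = s
    return C
```

A production kernel would additionally pack each tile into a contiguous, SIMD-friendly layout before the inner loops, which is the role the abstract attributes to the TensorGEMM routine.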

Keywords: deep learning; inference; convolutional neural network; matrix multiplication; ARM architecture; parallel computing; source code; iterative method
Published in: IEEE Transactions on Parallel and Distributed Systems

Reverse-safe data structures for text indexing

2021

We introduce the notion of reverse-safe data structures. These are data structures that prevent the reconstruction of the data they encode (i.e., they cannot be easily reversed). A data structure D is called z-reverse-safe when there exist at least z datasets with the same set of answers as the ones stored by D. The main challenge is to ensure that D stores as many answers to useful queries as possible, is constructed efficiently, and has size close to the size of the original dataset it encodes. Given a text of length n and an integer z, we propose an algorithm which constructs a z-reverse-safe data structure that has size O(n) and answers pattern matching queries of length at most d optim…
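
The definition of z-reverse-safety can be made concrete on toy inputs. The brute-force helper below is hypothetical and only for illustration (the paper's construction is far more efficient): it counts how many length-n texts over a small alphabet produce exactly the given answer set of substrings up to length d. A structure storing those answers is z-reverse-safe when this count is at least z.

```python
from itertools import product

def consistent_texts(answers, n, d, alphabet="ab"):
    """Count length-n texts whose set of substrings of length <= d
    equals `answers` (toy model of z-reverse-safety; exponential in n)."""
    def subs(t):
        return {t[i:i + l] for l in range(1, d + 1) for i in range(len(t) - l + 1)}
    return sum(1 for t in map("".join, product(alphabet, repeat=n))
               if subs(t) == answers)
```

For example, with n = 3 and d = 1, the answer set {'a', 'b'} is consistent with every 3-letter text over {a, b} that uses both letters, so the structure would be 6-reverse-safe.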

Keywords: data structure; algorithm; combinatorics; de Bruijn graph; data mining; privacy; pattern matching; search engine indexing; adversary model

Does the left inferior parietal lobule contribute to multiplication facts?

2005

We report a single case who presented with a selective and severe impairment of multiplication and division facts. His ability to retrieve subtraction and addition facts was entirely normal. His brain lesion affected the left superior temporal gyrus and, to a lesser extent, the left middle temporal gyrus and the left precentral gyrus, extending inferiorly to the pars opercularis of the left frontal lobe. Interestingly, the left supramarginal and angular gyri (SMG/AG) were spared. This finding established a double dissociation with a previously reported patient who, despite lesions in the SMG/AG, did not have a multiplication impairment (van Harskamp et al., 2002). The previously suggested crucial role …

Keywords: dyscalculia; arithmetical fact retrieval; multiplication and division impairment; left inferior parietal lobule; supramarginal and angular gyri; parietal lobe; pars opercularis; functional laterality; magnetic resonance imaging
Published in: Cortex, a journal devoted to the study of the nervous system and behavior

Complex multiplication, Griffiths-Yukawa couplings, and rigidity for families of hypersurfaces

2003

Let M(d,n) be the moduli stack of hypersurfaces of degree d > n in the complex projective n-space, and let M(d,n;1) be the sub-stack parameterizing hypersurfaces obtained as a d-fold cyclic covering of the projective (n-1)-space, ramified over a hypersurface of degree d. Iterating this construction, one obtains M(d,n;r). We show that M(d,n;1) is rigid in M(d,n), although the Griffiths-Yukawa coupling degenerates for d < 2n. On the other hand, for all d > n the sub-stack M(d,n;2) deforms. We calculate the exact length of the Griffiths-Yukawa coupling over M(d,n;r), and we construct a 4-dimensional family of quintic hypersurfaces, and a dense set of points in the base, where the fibres ha…
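
The cyclic covering construction in the abstract has a standard explicit description; the sketch below uses our own notation, which may differ from the paper's conventions:

```latex
% A degree-d hypersurface Y = V(f) in P^{n-1} determines the d-fold cyclic
% covering of P^{n-1} ramified along Y, itself a degree-d hypersurface in P^n:
\[
  X \;=\; \bigl\{\, [x_0 : \dots : x_n] \in \mathbb{P}^{n} \;:\;
          x_n^{\,d} = f(x_0, \dots, x_{n-1}) \,\bigr\},
\]
% so X defines a point of the sub-stack M(d,n;1) inside M(d,n);
% iterating the construction yields M(d,n;r).
```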

Keywords: complex multiplication; Griffiths-Yukawa coupling; rigidity; moduli; hypersurface; stack; Algebraic Geometry (math.AG); Complex Variables (math.CV); MSC 14D05, 14D07, 14J32, 14J70

Iterative sparse matrix-vector multiplication for accelerating the block Wiedemann algorithm over GF(2) on multi-graphics processing unit systems

2012

The block Wiedemann (BW) algorithm is frequently used to solve sparse linear systems over GF(2). Iterative sparse matrix–vector multiplication is the most time-consuming operation. The necessity to accelerate this step is motivated by the application of BW to very large matrices used in the linear algebra step of the number field sieve (NFS) for integer factorization. In this paper, we derive an efficient CUDA implementation of this operation by using a newly designed hybrid sparse matrix format. This leads to speedups between 4 and 8 on a single graphics processing unit (GPU) for a number of tested NFS matrices compared with an optimized multicore implementation. We further present…
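
The core operation can be illustrated over GF(2), where addition is XOR, so a sparse matrix-vector product reduces to XOR-ing selected bits. A minimal sketch in Python, with bit-packed vectors stored as integers; this shows the arithmetic only, not the paper's hybrid CUDA format:

```python
def spmv_gf2(rows, x):
    """Sparse matrix-vector product y = A x over GF(2).
    rows: list (one entry per matrix row) of column indices holding a 1.
    x: Python int used as a bit vector (bit j of x is x_j).
    Returns y as an int where bit i is the XOR of x_j over j in rows[i]."""
    y = 0
    for i, cols in enumerate(rows):
        bit = 0
        for j in cols:
            bit ^= (x >> j) & 1  # add (mod 2) the j-th component of x
        y |= bit << i
    return y
```

Block Wiedemann applies this product repeatedly to blocks of vectors, which is why the per-product cost dominates the running time.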

Keywords: block Wiedemann algorithm; sparse matrix-vector multiplication; GF(2); graphics processing unit; GPU cluster; parallel computing; number field sieve; linear algebra; integer factorization; sparse matrix
Published in: Concurrency and Computation: Practice and Experience

Generalized centro-invertible matrices with applications

2014

Centro-invertible matrices were introduced by R.S. Wikramaratna in 2008. For an involutory matrix R, we define the generalized centro-invertible matrices with respect to R to be those matrices A such that RAR = A^−1. We apply these matrices to a problem in modular arithmetic. Specifically, algorithms for image blurring/deblurring are designed by means of generalized centro-invertible matrices. In addition, if R1 and R2 are n × n involutory matrices, then there is a simple bijection between the set of all centro-invertible matrices with respect to R1 and the set with respect to R2.
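
The defining identity RAR = A^−1 is easy to check numerically by verifying (RAR)A = I. A small sketch (helper names are ours, not the paper's): with R the 2 × 2 exchange matrix, A = diag(2, 1/2) is generalized centro-invertible, since RAR = diag(1/2, 2) = A^−1.

```python
def matmul(A, B):
    """Plain dense matrix product for small square matrices."""
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def is_centro_invertible(A, R):
    """Check RAR = A^{-1} by testing whether (R A R) A equals the identity."""
    n = len(A)
    P = matmul(matmul(matmul(R, A), R), A)
    return all(abs(P[i][j] - (1.0 if i == j else 0.0)) < 1e-9
               for i in range(n) for j in range(n))
```

Note that R itself is always centro-invertible with respect to R, because R is involutory: RRR = R = R^−1.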

Keywords: centro-invertible matrix; centro-symmetric matrix; involutory matrix; integer matrix; matrix multiplication; matrix analysis; applied mathematics

Fast Matrix Multiplication

2015

Until a few years ago, the fastest known matrix multiplication algorithm, due to Coppersmith and Winograd (1990), ran in time O(n^2.3755). Recently, a surge of activity by Stothers, Vassilevska-Williams, and Le Gall has led to an improved algorithm running in time O(n^2.3729). These algorithms are obtained by analyzing higher and higher tensor powers of a certain identity of Coppersmith and Winograd. We show that this exact approach cannot result in an algorithm with running time O(n^2.3725), and identify a wide class of variants of this approach which cannot result in an algorithm with running time O(n^2.3078); in particular, this approach cannot prove the conjecture that for every ε > 0, …
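
As background on sub-cubic matrix multiplication (the abstract concerns limits of the Coppersmith-Winograd family, not this scheme): Strassen's identity multiplies two 2 × 2 matrices with 7 scalar products instead of 8, and applying it recursively to block matrices yields an O(n^2.807) algorithm.

```python
def strassen_2x2(A, B):
    """Strassen's 7-multiplication product of two 2x2 matrices.
    Used recursively on blocks, it gives the O(n^{log2 7}) algorithm."""
    (a, b), (c, d) = A
    (e, f), (g, h) = B
    p1 = a * (f - h)
    p2 = (a + b) * h
    p3 = (c + d) * e
    p4 = d * (g - e)
    p5 = (a + d) * (e + h)
    p6 = (b - d) * (g + h)
    p7 = (a - c) * (e + f)
    return [[p5 + p4 - p2 + p6, p1 + p2],
            [p3 + p4,           p1 + p5 - p3 - p7]]
```

The Coppersmith-Winograd line of work replaces this simple identity with far larger tensor identities, which is exactly the family of constructions whose limits the paper analyzes.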

Keywords: matrix multiplication; Coppersmith-Winograd algorithm; tensor; conjecture; running time; combinatorics
Published in: Proceedings of the Forty-Seventh Annual ACM Symposium on Theory of Computing

A Scalable matrix computing unit architecture for FPGA and SCUMO user design interface

2019

High-dimensional matrix algebra is essential in numerous signal processing and machine learning algorithms. This work describes a scalable square matrix-computing unit designed on the basis of circulant matrices. It optimizes data flow for the computation of any sequence of matrix operations, removing the need to move intermediate results, and supports each matrix operation in direct or transposed form (the transpose operation only requires a data-addressing modification). The allowed matrix operations are: matrix-by-matrix addition, subtraction, dot product and multiplication; matrix-by-vector multiplication; and matrix-by-scalar multiplication.…
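
A circulant matrix is fully determined by its first row, each subsequent row being a cyclic shift of it, which is why circulant structure suits a fixed hardware data path: the full matrix never needs to be stored or moved. A sketch of a circulant matrix-vector product in Python (illustrative only, not the SCUMO design):

```python
def circulant_matvec(c, x):
    """Multiply the circulant matrix with first row c by vector x.
    Entry (i, j) of the matrix is c[(j - i) mod n], so only the
    length-n vector c is ever stored."""
    n = len(c)
    return [sum(c[(j - i) % n] * x[j] for j in range(n)) for i in range(n)]
```

The same indexing trick means a transposed product only changes the addressing, mirroring the abstract's remark that transposition needs only a data-addressing modification.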

Keywords: matrix-computing unit; matrix processor; matrix arithmetic; circulant matrices; FPGA; hardware implementation; computer architecture; scalar multiplication; dot product; matrix multiplication; transpose

The Sliced COO Format for Sparse Matrix-Vector Multiplication on CUDA-enabled GPUs

2012

Existing formats for Sparse Matrix-Vector Multiplication (SpMV) on the GPU outperform their corresponding implementations on multi-core CPUs. In this paper, we present a new format called Sliced COO (SCOO) and an efficient CUDA implementation to perform SpMV on the GPU. While previous work shows experiments on small to medium-sized sparse matrices, we perform evaluations on large sparse matrices. We compared SCOO performance to existing formats of the NVIDIA Cusp library. Our results on a Fermi GPU show that SCOO outperforms the COO and CSR format for all tested matrices and the HYB format for all tested unstructured matrices. Furthermore, comparison to a Sandy-Bridge CPU sho…
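
The arithmetic underlying any COO-based format is a scatter of val * x[col] contributions into y[row]; SCOO's contribution lies in how the triplets are sliced and sorted for GPU memory locality, not in the arithmetic itself. A plain-Python reference version of COO SpMV:

```python
def spmv_coo(rows, cols, vals, x, n_rows):
    """SpMV y = A x with A given as parallel COO arrays of
    (row index, column index, value) triplets."""
    y = [0.0] * n_rows
    for r, c, v in zip(rows, cols, vals):
        y[r] += v * x[c]  # scatter one nonzero's contribution
    return y
```

On a GPU, many threads perform these scattered updates concurrently, so the ordering and partitioning of the triplets (the "slicing" in SCOO) determines memory coalescing and contention.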

Keywords: sparse matrix-vector multiplication; SpMV; CUDA; GPU; parallel computing; sparse matrix
Published in: Procedia Computer Science